Scalable Structure Learning of Continuous-Time Bayesian Networks from Incomplete Data

Linzner, Dominik, Schmidt, Michael, Koeppl, Heinz

Neural Information Processing Systems

Continuous-time Bayesian Networks (CTBNs) represent a compact yet powerful framework for understanding multivariate time-series data. Given complete data, parameters and structure can be estimated efficiently in closed-form. However, if data is incomplete, the latent states of the CTBN have to be estimated by laboriously simulating the intractable dynamics of the assumed CTBN. This is a problem, especially for structure learning tasks, where this has to be done for each element of a super-exponentially growing set of possible structures. In order to circumvent this notorious bottleneck, we develop a novel gradient-based approach to structure learning. Instead of sampling and scoring all possible structures individually, we assume the generator of the CTBN to be composed as a mixture of generators stemming from different structures. In this framework, structure learning can be performed via a gradient-based optimization of mixture weights. We combine this approach with a new variational method that allows for a closed-form calculation of this mixture marginal likelihood. We show the scalability of our method by learning structures of previously inaccessible sizes from synthetic and real-world data.
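
To make the mixture-of-generators idea above concrete, here is a minimal sketch in Python with JAX, not the authors' implementation: it reduces the CTBN to a single continuous-time Markov chain, assumes two hypothetical candidate generators Q1 and Q2 together with toy sufficient statistics N (transition counts) and T (dwell times), and learns softmax-parameterized mixture weights by gradient descent on the complete-data negative log-likelihood. All names and numbers below are illustrative assumptions.

# Minimal sketch, assuming a one-node CTBN (i.e., a single CTMC); the
# candidate generators, data, and variable names are illustrative
# assumptions, not the authors' code or a released API.
import jax
import jax.numpy as jnp

# Two hypothetical candidate generators (rate matrices) on a 3-state space:
# off-diagonal entries are transition rates, rows sum to zero.
Q1 = jnp.array([[-1.0,  0.7,  0.3],
                [ 0.2, -0.5,  0.3],
                [ 0.4,  0.1, -0.5]])
Q2 = jnp.array([[-0.3,  0.1,  0.2],
                [ 0.6, -1.0,  0.4],
                [ 0.1,  0.2, -0.3]])
candidates = jnp.stack([Q1, Q2])

# Toy complete-data sufficient statistics: transition counts N[s, s'] and
# dwell times T[s]. In the incomplete-data setting of the paper these are
# not observed; their expectations would come from variational inference.
N = jnp.array([[0., 12., 5.],
               [3., 0.,  7.],
               [8., 2.,  0.]])
T = jnp.array([10.0, 14.0, 9.0])

def neg_log_likelihood(theta):
    """Negative CTMC log-likelihood under the mixture generator
    Q = sum_k pi_k * Q_k, with weights pi = softmax(theta) on the simplex."""
    pi = jax.nn.softmax(theta)
    Q = jnp.einsum('k,kij->ij', pi, candidates)
    off = ~jnp.eye(Q.shape[0], dtype=bool)           # off-diagonal mask
    rates = jnp.where(off, Q, 1.0)                   # guard the log on the diagonal
    ll = jnp.sum(jnp.where(off, N * jnp.log(rates), 0.0))
    ll -= jnp.sum(T * jnp.where(off, Q, 0.0).sum(axis=1))  # exit-rate term
    return -ll

# Structure learning as gradient-based optimization of the mixture weights.
theta = jnp.zeros(candidates.shape[0])  # start from a uniform mixture
grad = jax.grad(neg_log_likelihood)
for _ in range(200):
    theta = theta - 0.05 * grad(theta)

print("learned mixture weights:", jax.nn.softmax(theta))

In the paper's actual setting the sufficient statistics are latent and their expectations are supplied by the proposed variational method, and the mixture is over generators induced by different candidate structures rather than two fixed rate matrices.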


Reviews: Scalable Structure Learning of Continuous-Time Bayesian Networks from Incomplete Data

Neural Information Processing Systems

Summary: In the manuscript, the authors extend continuous-time Bayesian networks by incorporating a mixture prior over the conditional intensity matrices, thereby allowing for a larger class of priors than the gamma prior usually employed over these matrices. My main concerns are with clarity / quality, as the manuscript is quite densely written, with much of the material either omitted or shifted to the appendix. For a non-expert in continuous-time Bayesian networks, it is quite hard to read. Additionally, there are quite a few minor mistakes (see below) that make understanding the manuscript harder. Originality: The authors combine the variational inference method from Linzner et al. [11] with the new mixture prior over the dependency structure.


Reviews: Scalable Structure Learning of Continuous-Time Bayesian Networks from Incomplete Data

Neural Information Processing Systems

This paper contributes a new technique for estimating structure in continuous-time Bayesian networks, and completes the picture with an accompanying inference method and an illustration on a real-world problem. There is agreement among reviewers that this is a high-quality contribution, taking the confidence-weighted reviewer scores into account. As a point for improvement, we reiterate a comment raised in the reviewer discussion: "[the paper] is missing reasonable and helpful experimental comparisons that are not hard to do, given that the code exists already in CTBN-RLE", and the authors are encouraged to broaden their experimental comparisons for the final published version.


Scalable Structure Learning of Continuous-Time Bayesian Networks from Incomplete Data

Linzner, Dominik, Schmidt, Michael, Koeppl, Heinz

arXiv.org Machine Learning

Continuous-time Bayesian Networks (CTBNs) represent a compact yet powerful framework for understanding multivariate time-series data. Given complete data, parameters and structure can be estimated efficiently in closed-form. However, if data is incomplete, the latent states of the CTBN have to be estimated by laboriously simulating the intractable dynamics of the assumed CTBN. This is a problem, especially for structure learning tasks, where this has to be done for each element of a super-exponentially growing set of possible structures. In order to circumvent this notorious bottleneck, we develop a novel gradient-based approach to structure learning. Instead of sampling and scoring all possible structures individually, we assume the generator of the CTBN to be composed as a mixture of generators stemming from different structures. In this framework, structure learning can be performed via a gradient-based optimization of mixture weights. We combine this approach with a novel variational method that allows for the calculation of the marginal likelihood of a mixture in closed-form. We prove the scalability of our method by learning structures of previously inaccessible sizes from synthetic and real-world data.